Drop the support for PyTorch<2.0 #3272
Conversation
Check out this pull request on ReviewNB to see visual diffs & provide feedback on Jupyter Notebooks.
I can't reproduce this failing test locally, and there is not enough information in the logs to debug it. Does anyone have suggestions for how to fix it? The changes in this PR should be unrelated to this test.
Thanks for fixing this! Just a few comments.
`profiler/gaussianhmm.py` (Outdated)

```diff
@@ -21,7 +21,8 @@ def random_mvn(batch_shape, dim, requires_grad=False):

 def main(args):
     if args.cuda:
-        torch.set_default_tensor_type("torch.cuda.FloatTensor")
+        torch.set_default_device("cuda")
+        torch.set_default_dtype(torch.float32)
```
Is the `set_default_dtype` needed here? I see it omitted in other changes.
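As background (not part of the PR's code): `torch.set_default_device` changes only the default device, not the dtype, and the factory default dtype is already `torch.float32` — so an explicit `set_default_dtype(torch.float32)` only matters if something earlier changed the default. A quick CPU-only sketch, assuming torch>=2.0:

```python
import torch

# The factory default dtype is float32; setting it again is a no-op.
torch.set_default_dtype(torch.float32)
x = torch.empty(3)

# Changing the default dtype affects subsequently created tensors...
torch.set_default_dtype(torch.float64)
y = torch.empty(3)

# ...so restore the factory default to avoid leaking state.
torch.set_default_dtype(torch.float32)

print(x.dtype, y.dtype)  # torch.float32 torch.float64
```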
`tests/common.py` (Outdated)

```diff
-    new_module = "torch.cuda" if host == "cuda" else "torch"
-    torch.set_default_tensor_type("{}.{}".format(new_module, name))
     old_host = torch.Tensor().device
+    torch.set_default_device(host)
```
nit: Could we move the `torch.set_default_device(host)` into the `try` block? I realize this was wrong before your PR, but it seems like a good time to fix it.
Do we need this context manager at all? How is it different from `with torch.device(device)`? https://pytorch.org/docs/stable/generated/torch.set_default_device.html

> To only temporarily change the default device instead of setting it globally, use `with torch.device(device):` instead.
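To illustrate the difference between the two (not code from this PR), here is a small sketch using the allocation-free `"meta"` device so it runs without a GPU; it assumes torch>=2.0, where both APIs exist:

```python
import torch

# Scoped: torch.device as a context manager overrides the default device
# only inside the with-block.
with torch.device("meta"):
    inside = torch.empty(3)
outside = torch.empty(3)
print(inside.device.type, outside.device.type)  # meta cpu

# Global: torch.set_default_device changes the default until it is set back.
torch.set_default_device("meta")
global_default = torch.empty(3)
torch.set_default_device("cpu")  # restore
print(global_default.device.type)  # meta
```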
Oh, it would be great to replace this custom context manager with `torch.device`, as long as `torch.device` can be used as a context manager in our earliest supported torch version, 1.11.0.
It looks like `torch.set_default_device` is new in PyTorch 1.12: https://github.com/pytorch/pytorch/releases

Introduced in this PR 9 months ago:
ok, well let's keep the polyfill until we drop support for torch==1.11.
To clarify: this also applies to `torch.set_default_device` throughout this PR, and if we want to keep torch 1.11 support then there will be a lot of if/else statements between `torch.set_default_device` and `torch.set_default_tensor_type` to set the device.
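A hedged sketch of what such a polyfill could look like (the helper name and structure here are hypothetical, not code from this PR, and the fallback branch only covers float tensor types):

```python
import torch

def set_default_device(device):
    """Hypothetical polyfill: prefer torch.set_default_device (torch>=2.0),
    falling back to the older torch.set_default_tensor_type on torch<2.0."""
    if hasattr(torch, "set_default_device"):
        torch.set_default_device(device)
    elif str(device).startswith("cuda"):
        # The older API couples device and dtype into one tensor-type string.
        torch.set_default_tensor_type("torch.cuda.FloatTensor")
    else:
        torch.set_default_tensor_type("torch.FloatTensor")

set_default_device("cpu")
x = torch.empty(2)
print(x.device.type)  # cpu
```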
Hmm.. it looks like `set_default_device` is not exposed or not available even in torch 1.13: https://pytorch.org/docs/1.13/search.html?q=set_default&check_keywords=yes&area=default
Pyro has always aimed at being more stable than torch, and we have historically implemented polyfills in Pyro to smooth over PyTorch's move-fast-and-break-things attitude. If I had time, I'd implement polyfills like a `pyro.util.device()` context manager, a `pyro.util.set_default_device()` helper, etc. But I don't have time, and maybe it's time to drop this aim as our maintenance resources dwindle. 🤷

The motivation for being very stable is to avoid giving people headaches. Every time we drop support for some version of some underlying library, some grad student wastes a day trying to install an old repo whose author has graduated and didn't pin versions. Every time we pin a library version, some software engineer at BigCo wastes a week solving dependency issues between conflicting libraries with non-overlapping version pins: spending a day committing to an upstream repo maintained by an overcommitted professor, building polyfills around another dependency (which doesn't explicitly pin versions but actually depends on a version outside our range, which took half a day to figure out), and replacing a third library that is no longer maintained.
If you do decide to drop torch 1.11 support, could you update the version pins everywhere and update the supported Python versions? And we'll bump the minor version in our next release.
> Hmm.. it looks like `set_default_device` is not exposed or not available even in torch 1.13: https://pytorch.org/docs/1.13/search.html?q=set_default&check_keywords=yes&area=default
I confirmed this by installing it locally:
```python
>>> import torch
>>> torch.__version__
'1.13.0+cu117'
>>> torch.set_default_device
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: module 'torch' has no attribute 'set_default_device'
```
Can you confirm that it's ok to drop the support for all of torch 1.11, 1.12, and 1.13?
Sure, let's just be sure to announce in our release notes and bump the minor version.
```python
    if args.double:
        torch.set_default_dtype(torch.float64)
```
delete these two new lines (they are unnecessary given line 858 above)
@fritzo addressed your comments and added the description to the PR
`torch.set_default_tensor_type` has been deprecated in PyTorch 2.1.
Looks great!
Thanks for reviewing @fritzo!
This PR drops the support for torch<2.0:

- Splits `torch.set_default_tensor_type` (deprecated in torch=2.1) into `torch.set_default_dtype` and `torch.set_default_device`
- Fixes the `_torch_scheduler_base` definition
- Unpins `pillow>=8.3.1` since the issue in 8.3 has been fixed in 8.3.1
- Requires `torch>=2.0` and `torchvision>=0.15.0`
- Removes `tensors_default_to` and replaces it with `with torch.device` (fixes the CVAE example #3273)
- Renames `pytorch_lightning` to `lightning.pytorch` (preferred)
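The first change above can be sketched as follows (an illustration, not code from this PR; a CUDA-availability guard is added so the sketch also runs on CPU-only hosts):

```python
import torch

# Before (deprecated in torch 2.1):
#   torch.set_default_tensor_type("torch.cuda.FloatTensor")

# After: the device and dtype halves are set separately.
if torch.cuda.is_available():  # guard: only meaningful on CUDA hosts
    torch.set_default_device("cuda")
torch.set_default_dtype(torch.float32)

x = torch.empty(2)
print(x.dtype)  # torch.float32
```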